
    Sim2Real View Invariant Visual Servoing by Recurrent Control

    Humans are remarkably proficient at controlling their limbs and tools from a wide range of viewpoints and angles, even in the presence of optical distortions. In robotics, this ability is referred to as visual servoing: moving a tool or end-point to a desired location using primarily visual feedback. In this paper, we study how viewpoint-invariant visual servoing skills can be learned automatically in a robotic manipulation scenario. To this end, we train a deep recurrent controller that can automatically determine which actions move the end-point of a robotic arm to a desired object. The problem that must be solved by this controller is fundamentally ambiguous: under severe variation in viewpoint, it may be impossible to determine the actions in a single feedforward operation. Instead, our visual servoing system must use its memory of past movements to understand how the actions affect the robot motion from the current viewpoint, correcting mistakes and gradually moving closer to the target. This ability is in stark contrast to most visual servoing methods, which either assume known dynamics or require a calibration phase. We show how we can learn this recurrent controller using simulated data and a reinforcement learning objective. We then describe how the resulting model can be transferred to a real-world robot by disentangling perception from control and only adapting the visual layers. The adapted model can servo to previously unseen objects from novel viewpoints on a real-world Kuka IIWA robotic arm. Supplementary videos: https://fsadeghi.github.io/Sim2RealViewInvariantServo
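
    The abstract describes the controller only at a high level; the sketch below is a hypothetical illustration (not the authors' code) of a recurrent visual servoing policy in PyTorch, in which an LSTM integrates image features and the previous action so the unknown viewpoint can be inferred from how past actions moved the arm. Layer sizes and names are assumptions.

        import torch
        import torch.nn as nn

        class RecurrentServoPolicy(nn.Module):
            """Hypothetical recurrent controller: image + previous action -> next action."""
            def __init__(self, action_dim=3, hidden_dim=256):
                super().__init__()
                # Visual layers: the part that, per the abstract, would be adapted for
                # sim-to-real transfer while the recurrent control layers stay fixed.
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
                    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                    nn.Flatten(),
                    nn.LazyLinear(256), nn.ReLU(),
                )
                # Recurrent core: memory of past observations and actions.
                self.rnn = nn.LSTMCell(256 + action_dim, hidden_dim)
                self.action_head = nn.Linear(hidden_dim, action_dim)

            def forward(self, image, prev_action, state=None):
                feat = self.encoder(image)                  # (batch, 256) image features
                x = torch.cat([feat, prev_action], dim=-1)  # condition on the last action taken
                h, c = self.rnn(x, state)
                return self.action_head(h), (h, c)          # next action and updated memory

    Training such a policy with a reinforcement learning objective in simulation and then fine-tuning only the visual encoder on real images would mirror the perception/control disentanglement the abstract describes.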

    Time-Contrastive Networks: Self-Supervised Learning from Video

    We propose a self-supervised approach for learning representations and robotic behaviors entirely from unlabeled videos recorded from multiple viewpoints, and study how this representation can be used in two robotic imitation settings: imitating object interactions from videos of humans, and imitating human poses. Imitation of human behavior requires a viewpoint-invariant representation that captures the relationships between end-effectors (hands or robot grippers) and the environment, object attributes, and body pose. We train our representations using a metric learning loss, where multiple simultaneous viewpoints of the same observation are attracted in the embedding space, while being repelled from temporal neighbors which are often visually similar but functionally different. In other words, the model simultaneously learns to recognize what is common between different-looking images, and what is different between similar-looking images. This signal causes our model to discover attributes that do not change across viewpoint, but do change across time, while ignoring nuisance variables such as occlusions, motion blur, lighting and background. We demonstrate that this representation can be used by a robot to directly mimic human poses without an explicit correspondence, and that it can be used as a reward function within a reinforcement learning algorithm. While representations are learned from an unlabeled collection of task-related videos, robot behaviors such as pouring are learned by watching a single 3rd-person demonstration by a human. Reward functions obtained by following the human demonstrations under the learned representation enable efficient reinforcement learning that is practical for real-world robotic systems. Video results, open-source code and dataset are available at https://sermanet.github.io/imitat
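
    As a rough illustration of the metric learning loss described above (a sketch under assumptions, not the paper's implementation): simultaneous frames from two viewpoints form the anchor-positive pair, while a temporally nearby frame from the anchor's own viewpoint serves as the negative. The margin value and function names are placeholders.

        import torch
        import torch.nn.functional as F

        def time_contrastive_loss(anchor, positive, negative, margin=0.2):
            """anchor/positive: embeddings of the same moment seen from different cameras;
            negative: embedding of a nearby moment from the anchor's own camera."""
            d_pos = (anchor - positive).pow(2).sum(dim=1)  # pull co-temporal views together
            d_neg = (anchor - negative).pow(2).sum(dim=1)  # push temporal neighbors apart
            return F.relu(d_pos - d_neg + margin).mean()   # standard triplet hinge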

    Foreign Military Sales Supply Support: Is There a Better Way?

    In today's world of declining defense budgets, there is an increasing need for South Korea to ensure it obtains the best dollar value when procuring defense articles. Given this financial pressure, the purpose of this thesis is to examine a surrogate third-party firm and determine to what degree South Korea and other foreign military sales customers obtain the best value for their money on follow-on support item procurements. South Korea has participated in the Parts and Repair Ordering System program since its inception. However, South Korea has received little, if any, feedback regarding lead time and cost performance from the Air Force Security Assistance Command. This study analyzed two variables, lead time and total unit cost, and compared them across two procurement systems to determine which provides the best lead time performance for the total average unit price. The analysis concluded that the surrogate third party was faster, though not significantly so; however, this speed came at a high price for follow-on support item procurement.
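
    In effect, the study is a two-sample comparison of lead time and total unit cost across two procurement channels. A minimal sketch of that kind of analysis is shown below; the data frame, column names, and the choice of Welch's t-test are illustrative assumptions, not details taken from the thesis.

        import pandas as pd
        from scipy import stats

        def compare_channels(orders: pd.DataFrame, metric: str):
            """orders needs a 'channel' column (the two procurement systems) plus the metric column."""
            groups = [g[metric].dropna() for _, g in orders.groupby("channel")]
            return stats.ttest_ind(*groups, equal_var=False)  # Welch's t-test for unequal variances

        # Hypothetical usage:
        # for metric in ("lead_time_days", "total_unit_cost"):
        #     t, p = compare_channels(orders, metric)
        #     print(metric, round(t, 2), round(p, 3))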

    Field Test for Repellency of Cedarwood Oil and Cedrol to Little Fire Ants

    Eastern redcedars (Juniperus virginiana L.) are an abundant renewable resource and represent a potential source of valuable natural products that may serve as natural biocides. The aromatic wood can be extracted to obtain cedarwood oil (CWO), and critical carbon dioxide (CO2) extraction of eastern redcedar gives both high yields and high-quality CWO. In this study, CO2-derived CWO and cedrol, the most abundant component of CWO, were field-tested for repellency against the little fire ant (LFA), Wasmannia auropunctata Roger, in a Hawaiian macadamia orchard. Field tests were conducted using chopsticks baited with peanut butter placed in established LFA trails on macadamia tree trunks and branches. The chopsticks and any ants present were collected after ca. 24 hours, and the number of ants was determined by visual counting. Four treatments were compared: hexane-only control, mineral oil, CWO, and cedrol. Control chopsticks and chopsticks treated with mineral oil had very high numbers of ants and were statistically equivalent. The CWO-treated chopsticks had significantly fewer LFAs than all the other treatments. Chopsticks treated with cedrol had fewer ants than the control chopsticks but more than the chopsticks treated with CWO. This research suggests that CWO extracted from J. virginiana may provide a renewable source of a natural ant repellent and could help manage this invasive pest.
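
    The abstract does not state which statistical test was used; purely as an illustration, ant counts per chopstick could be compared across the four treatments with a nonparametric test such as Kruskal-Wallis, as sketched below (variable and function names are made up).

        from scipy import stats

        def compare_treatments(counts_by_treatment):
            """counts_by_treatment: dict mapping treatment name (e.g. 'hexane control',
            'mineral oil', 'CWO', 'cedrol') to a list of ant counts per chopstick."""
            h, p = stats.kruskal(*counts_by_treatment.values())
            return h, p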

    Fat suppression for ultrashort echo time imaging using a novel soft-hard composite radiofrequency pulse.

    Purpose: To design a soft-hard composite pulse for fat suppression and water excitation in ultrashort echo time (UTE) imaging with minimal short T2 signal attenuation. Methods: The composite pulse contains a narrow-bandwidth soft pulse centered on the fat peak with a small negative flip angle (-α) and a short rectangular pulse with a small positive flip angle (α). The fat magnetization experiences both tipping-down and tipping-back with an identical flip angle and thus returns to the equilibrium state, leaving only the excited water magnetization. Bloch simulations, as well as knee, tibia, and ankle UTE imaging studies, were performed to investigate the effectiveness of fat suppression and the corresponding water signal attenuation. A conventional fat saturation (FatSat) module was used for comparison. The signal suppression ratio (SSR), defined as the ratio of the signal difference between non-fat-suppression and fat-suppression images over the non-fat-suppression signal, was introduced to evaluate the efficiency of the composite pulse. Results: Numerical simulations demonstrate that the soft-hard pulse has little saturation effect on short T2 water signals. Knee, tibia, and ankle UTE imaging results suggest that comparable fat suppression can be achieved with the soft-hard pulse and the FatSat module. However, much less water saturation is induced by the soft-hard pulse, especially for short T2 tissues, with SSRs reduced from 71.8 ± 6.9% to 5.8 ± 4.4% for the meniscus, from 68.7 ± 5.5% to 7.7 ± 7.6% for bone, and from 62.9 ± 12.0% to 4.8 ± 3.2% for the Achilles tendon. Conclusion: The soft-hard composite pulse can suppress fat signals in UTE imaging with little signal attenuation in short T2 tissues.
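
    The SSR defined in the abstract lends itself to a one-line calculation; the sketch below (variable names are placeholders) also notes why fat on resonance with the soft pulse ends up with a near-zero net flip.

        import numpy as np

        def signal_suppression_ratio(s_no_fatsup, s_fatsup):
            """SSR = (signal without fat suppression - signal with fat suppression)
            / signal without fat suppression; lower values mean less water signal loss."""
            return (np.asarray(s_no_fatsup) - np.asarray(s_fatsup)) / np.asarray(s_no_fatsup)

        # For fat on resonance with the soft pulse, the composite pulse applies -alpha then +alpha;
        # ignoring relaxation between the two sub-pulses, the net fat flip angle is ~0 degrees,
        # while water far from the fat peak is excited mainly by the +alpha hard pulse.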